Free AI courses from the Turing Institute

AIHub

An introduction to the essentials of transparent machine learning, designed so that learners from diverse backgrounds can understand and apply transparent machine learning in real-world applications with confidence and trust. Learners should have a basic knowledge of maths and of Python for machine learning.


Ethical Artificial Intelligence Principles and Guidelines for the Governance and Utilization of Highly Advanced Large Language Models

Hossain, Soaad, Ahmed, Syed Ishtiaque

arXiv.org Artificial Intelligence

Given the success of ChatGPT, LaMDA and other large language models (LLMs), the development and usage of LLMs has increased within the technology sector and beyond. While LLMs have not yet reached a level at which they surpass human intelligence, there will come a time when they do. Such LLMs can be referred to as advanced LLMs. Currently, there is limited use of ethical artificial intelligence (AI) principles and guidelines addressing advanced LLMs, because we have not reached that point yet. However, this is a problem: once we do reach that point, we will not be adequately prepared to deal with the aftermath in an ethical and optimal way, which will lead to undesired and unexpected consequences. This paper addresses this issue by discussing which ethical AI principles and guidelines can be used to address highly advanced LLMs.


Ethical AI isn't just how you build it, it's how you use it

#artificialintelligence

Lapses such as racially biased facial recognition and apparently sexist credit card approval algorithms have, thankfully, left companies asking how to build AI ethically. Many companies have released "ethical AI" guidelines, such as Microsoft's Responsible AI principles, which require that AI systems be fair, inclusive, reliable and safe, transparent, respectful of privacy and security, and accountable. These are laudable, and will help prevent the harms listed above. But harm can also result from what a system is used for, not only from unfairness, black-boxyness, or other implementation details. Consider an autonomous Uber: if it recognizes people using wheelchairs less accurately than people walking, this can be fixed by using training data that reflects the many ways people traverse a city, producing a fairer system.


AI experts question tech industry's ethical commitments

#artificialintelligence

From healthcare and education to finance and policing, artificial intelligence (AI) is becoming increasingly embedded in people's daily lives. Despite being posited by advocates as a dispassionate and fairer means of making decisions, free from the influence of human prejudice, the rapid development and deployment of AI has prompted concern over how the technology can be used and abused. These concerns include how it affects people's employment opportunities, its potential to enable mass surveillance, and its role in facilitating access to basic goods and services, among others. In response, the organisations that design, develop and deploy AI technologies – often with limited input from those most affected by its operation – have attempted to quell people's fears by setting out how they are approaching AI in a fair and ethical manner. Since around 2018, this has led to a deluge of ethical AI principles, guidelines, frameworks and declarations being published by both private organisations and government agencies around the world.


Introducing the Artificial Intelligence Ethics Playbook from GSMA

#artificialintelligence

Artificial intelligence is a powerful, emerging force that is transforming business and society. The potential of these technologies to unlock benefits for organisations and society is only beginning to be seen. AI can help organisations to improve prediction, optimise operations, allocate resources more efficiently, and personalise digital solutions. PwC estimates AI could contribute $15.7 trillion to the global economy by 2030. However, artificial intelligence isn't a futuristic technology; it is present in our everyday lives, used across a wide variety of industries.


Ten principles for ethical AI

#artificialintelligence

If you're taking a long-term approach to artificial intelligence (AI), you're likely thinking about how to make your AI systems ethical. Building ethical AI is the right thing to do. Not only do your corporate values demand it; it's also one of the best ways to help minimise risks ranging from compliance failures to brand damage. But building ethical AI is hard. The difficulty starts with a question: what is ethical AI?


9 ethical AI principles for organizations to follow

#artificialintelligence

Organizations around the globe are becoming more aware of the risks artificial intelligence (AI) may pose, including bias and potential job loss due to automation. At the same time, AI is providing many tangible benefits for organizations and society. For organizations, this creates a fine line between the potential harm AI might cause and the costs of not adopting the technology. Three emerging practices can help organizations navigate the complex world of moral dilemmas created by autonomous and intelligent systems. AI risks continue to grow, but so does the number of public and private organizations releasing ethical principles to guide the development and use of AI.


Deputy Defense Secretary Outlines Responsible AI Tenets in New Memo

#artificialintelligence

The Joint Artificial Intelligence Center will lead implementation of responsible AI across the Defense Department, according to a new directive. In a departmentwide memo signed last week, Deputy Defense Secretary Kathleen Hicks enumerated foundational tenets for responsible AI, reaffirmed the ethical AI principles the department adopted last year, and mandated the JAIC director start work on four activities for developing a responsible AI ecosystem. "As the DoD embraces artificial intelligence (AI), it is imperative that we adopt responsible behavior, processes, and outcomes in a manner that reflects the Department's commitment to its ethical principles, including the protection of privacy and civil liberties," Hicks said in the memo, which was announced June 1. "A trusted ecosystem not only enhances our military capabilities, but also builds confidence with end-users, warfighters, and the American public." Hicks assigned the JAIC director to coordinate responsible AI through a working council, which must in turn hammer out a strategy and implementation pathway, create a talent management framework, and report on how responsible AI can be integrated into acquisitions.


AI and Ethics -- Operationalising Responsible AI

Zhu, Liming, Xu, Xiwei, Lu, Qinghua, Governatori, Guido, Whittle, Jon

arXiv.org Artificial Intelligence

In the last few years, AI has continued to demonstrate its positive impact on society, though sometimes with ethically questionable consequences. Building and maintaining public trust in AI has been identified as key to successful and sustainable innovation. This chapter discusses the challenges of operationalizing ethical AI principles and presents an integrated view covering high-level ethical AI principles, the general notion of trust/trustworthiness, and product/process support in the context of responsible AI, which helps improve both the trust and the trustworthiness of AI for a wider set of stakeholders.


3 ways AI can improve disaster resilience and relief efforts

#artificialintelligence

First, expand current initiatives, which focus on specific use cases among a few partners, into a more impact-focused network of AI-driven disaster support. The attention currently devoted to developing algorithms should be balanced with at least as much energy and resources devoted to making these tools widely available and used on the front line of disaster relief. In many cases, that means more capability building. We also see duplication of effort, with the data science community working on similar use cases, which could be streamlined. One option might be to establish a domain-specific partnership or coalition through which industry and global agencies would coordinate focused development teams. Second, in the near term, develop more basic data capture and coordination tools across different agencies on the ground, rather than focusing the majority of investment on highly advanced AI.